Nonlinear Knowledge in Kernel Approximation
Authors
Abstract
Similar Articles
Knowledge-Based Kernel Approximation
Prior knowledge, in the form of linear inequalities that need to be satisfied over multiple polyhedral sets, is incorporated into a function approximation generated by a linear combination of linear or nonlinear kernels. In addition, the approximation needs to satisfy conventional conditions such as having given exact or inexact function values at certain points. Determining such an approximati...
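The abstract describes fitting a function as a linear combination of kernel evaluations so that it matches given function values at certain points. A minimal sketch of that core idea (the kernel choice, data, and least-squares fit are assumptions for illustration; the paper's actual formulation is a linear program with additional polyhedral knowledge constraints):

```python
import numpy as np

# Hypothetical sketch: approximate f(x) = sum_i alpha_i * K(x, x_i) with a
# Gaussian kernel, choosing alpha by least squares so the approximation
# reproduces given function values at the sample points.

def gaussian_kernel(X, Y, gamma=1.0):
    # K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(12, 1))      # sample points
y = np.sin(3 * X[:, 0])                   # exact function values to match

K = gaussian_kernel(X, X, gamma=10.0)     # 12 x 12 kernel matrix
alpha, *_ = np.linalg.lstsq(K, y, rcond=None)

f_hat = K @ alpha                         # approximation at the samples
print(np.max(np.abs(f_hat - y)))          # residual is small on the data
```

With linear kernels the same machinery recovers an ordinary linear fit; the cited work additionally imposes prior-knowledge inequalities over polyhedral sets as extra linear constraints.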
Nonlinear Knowledge in Kernel Machines
We give a unified presentation of recent work in applying prior knowledge to nonlinear kernel approximation [MW05] and nonlinear kernel classification [MW06]. In both approaches, prior knowledge over general nonlinear sets is incorporated into nonlinear kernel approximation or classification problems as linear constraints in a linear program. The key tool in this incorporation is a theorem of t...
Knowledge-Based Nonlinear Kernel Classifiers
Prior knowledge in the form of multiple polyhedral sets, each belonging to one of two categories, is introduced into a reformulation of a nonlinear kernel support vector machine (SVM) classifier. The resulting formulation leads to a linear program that can be solved efficiently. This extends, in a rather unobvious fashion, previous work [3] that incorporated similar prior knowledge into a linea...
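The reformulated kernel SVM the abstract mentions reduces to a linear program. A hedged sketch of that reduction for a plain 1-norm soft-margin kernel classifier (the toy data, kernel, and solver are assumptions; the paper's actual LP additionally encodes the polyhedral knowledge sets as constraints):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical sketch: 1-norm SVM as a linear program.  Variables are split
# into nonnegative parts (a = a+ - a-, b = b+ - b-) so linprog's x >= 0
# convention applies.  Objective: ||a||_1 + C * sum(xi).

def gaussian_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
n = 40
X = np.vstack([rng.normal(-2, 1, (n // 2, 2)),   # class -1 blob
               rng.normal(2, 1, (n // 2, 2))])   # class +1 blob
y = np.r_[-np.ones(n // 2), np.ones(n // 2)]

K = gaussian_kernel(X, X, gamma=0.5)
C = 10.0

# x = [a+ (n), a- (n), b+, b-, xi (n)], all >= 0
c = np.r_[np.ones(2 * n), 0.0, 0.0, C * np.ones(n)]
# Margin constraints y_i (K_i @ a + b) >= 1 - xi_i, written as A_ub @ x <= -1
YK = y[:, None] * K
A_ub = np.hstack([-YK, YK, -y[:, None], y[:, None], -np.eye(n)])
b_ub = -np.ones(n)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
a = res.x[:n] - res.x[n:2 * n]
b = res.x[2 * n] - res.x[2 * n + 1]
pred = np.sign(K @ a + b)
print((pred == y).mean())   # training accuracy on a separable toy problem
```

The knowledge-based extension in the cited work adds further linear inequalities to this same LP, one group per polyhedral knowledge set, without changing the solver.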
Memory Efficient Kernel Approximation
Scaling kernel machines to massive data sets is a major challenge due to storage and computation issues in handling large kernel matrices, that are usually dense. Recently, many papers have suggested tackling this problem by using a low-rank approximation of the kernel matrix. In this paper, we first make the observation that the structure of shift-invariant kernels changes from low-rank to blo...
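The low-rank approach this abstract contrasts itself with can be illustrated with the Nyström method, one standard low-rank scheme (this is an assumed example, not the paper's own block-structured algorithm): approximate the n x n kernel matrix from a small set of landmark columns.

```python
import numpy as np

# Hypothetical sketch of low-rank kernel approximation via the Nystrom
# method: K ~ C @ pinv(W) @ C.T, using only m << n landmark columns, so the
# full dense n x n matrix never needs to be stored in a real application.

def gaussian_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
K = gaussian_kernel(X, X, gamma=0.1)    # full matrix, built here only to
                                        # measure the approximation error

m = 100
idx = rng.choice(len(X), size=m, replace=False)   # landmark points
C = gaussian_kernel(X, X[idx], gamma=0.1)         # n x m
W = C[idx]                                        # m x m
K_approx = C @ np.linalg.pinv(W) @ C.T            # rank-m approximation

rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(rel_err)
```

Storage drops from O(n^2) to O(nm); the cited paper's observation is that for shift-invariant kernels the spectrum can shift from low-rank to block structure, which this purely low-rank scheme does not exploit.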
Kernel knowledge
The journal Nature demonstrated two principles of journalism recently. The first is that theories about mutant corn plants running amok can garner a great deal of attention. The second is that you can get even more coverage when you declare that you made a mistake and shouldn't have published the paper in the first place. This drama played out over a paper by researchers at UC Berkeley, who rep...
Journal
Title: IEEE Transactions on Neural Networks
Year: 2007
ISSN: 1045-9227
DOI: 10.1109/tnn.2006.886354